adds the e2e coverage collector for the operator using codecov #138
siddhibhor-56 wants to merge 1 commit into openshift:main from
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: siddhibhor-56. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing
Walkthrough
Adds end-to-end coverage support: a coverage-instrumented Docker image, a multi-stage Dockerfile to build it, a bash orchestration script with
Changes
E2E Coverage integration
Sequence Diagram
sequenceDiagram
participant CI as CI Runner
participant K8s as Kubernetes API
participant Operator as Operator Pod
participant PVC as Coverage PVC
participant Extractor as Short-lived Extractor Pod
participant Codecov as Codecov
CI->>K8s: run hack/e2e-coverage.sh setup (create PVC, patch CSV to use coverage image)
K8s->>Operator: rollout controller with coverage image, mount PVC, set GOCOVERDIR
CI->>K8s: run tests against instrumented Operator
CI->>K8s: run hack/e2e-coverage.sh collect (scale down operator)
K8s->>PVC: operator writes coverage files to PVC on shutdown
CI->>K8s: create Extractor Pod that mounts PVC
Extractor->>PVC: read coverage files
Extractor-->>CI: copy coverage artifacts to CI workspace
CI->>CI: convert covmeta -> go coverage profile
CI->>Codecov: optionally upload profile (if token present)
Codecov-->>CI: upload response
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 10 passed | ❌ 2 failed
❌ Failed checks (2 warnings)
✅ Passed checks (10 passed)
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@images/ci/Dockerfile`:
- Around line 30-31: The Dockerfile currently uses "RUN mkdir -p /tmp/e2e-cover
&& chmod 777 /tmp/e2e-cover" which creates a world-writable directory; instead
create the directory, chown it to UID/GID 65534 and set restrictive permissions
(e.g. 0700 or 0750) so only the intended user can write; update the RUN that
creates /tmp/e2e-cover to use chown 65534:65534 and chmod 700 (or 750 if group
write is needed) rather than chmod 777.
In `@Makefile`:
- Around line 234-243: The kubectl cp invocation in the Makefile is causing a
nested e2e-cover directory, so change the kubectl cp usage to copy the contents
of /tmp/e2e-cover directly into $(E2E_COVERAGE_DIR); update the kubectl cp line
that references $(KUBECTL) cp "$(E2E_COVERAGE_NAMESPACE)/$$POD:/tmp/e2e-cover"
$(E2E_COVERAGE_DIR) to use a trailing dot on the source (e.g.
"$(E2E_COVERAGE_NAMESPACE)/$$POD:/tmp/e2e-cover/." $(E2E_COVERAGE_DIR)) so
covmeta and covcounters land directly in $(E2E_COVERAGE_DIR) for go tool covdata
to consume.
- Around line 245-248: Replace the unsafe "latest" downloader invocation in the
Makefile (the curl/chmod/./codecov sequence) with a pinned release workflow:
download a specific versioned uploader binary URL (not "latest") and its
corresponding SHA256SUM (or .sha256) file, verify the binary against the
checksum (using sha256sum -c or equivalent) before making it executable and
running it (the ./codecov invocation), and fail the step if verification fails;
keep the existing flags (--file=$(OUTPUTS_PATH)/coverage-e2e.out --flags=e2e
--name="E2E Coverage" --verbose) and still remove the binary afterwards. Ensure
the Makefile references the chosen version string so it is explicit and
reproducible.
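The pin-and-verify workflow described in the Makefile comments above can be sketched as follows. To keep the checksum logic runnable, the download is simulated with a local stub binary; the commented `uploader.codecov.io` URLs show the shape of the real pinned download (version and flags are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail
workdir="$(mktemp -d)"
cd "$workdir"

# In the real Makefile these would be downloads of a pinned release, e.g.:
#   curl -fsSLO https://uploader.codecov.io/v0.7.6/linux/codecov
#   curl -fsSLO https://uploader.codecov.io/v0.7.6/linux/codecov.SHA256SUM
# Here the "downloaded" binary is simulated with a local stub.
printf '#!/bin/sh\necho uploader-ok\n' > codecov
sha256sum codecov > codecov.SHA256SUM

# Verify before executing; sha256sum -c exits non-zero (failing the make
# step under set -e) if the binary does not match its checksum file.
sha256sum -c codecov.SHA256SUM
chmod +x codecov
out="$(./codecov)"   # real invocation would pass --file/--flags/--name
echo "$out"

# Remove the binary afterwards, as the existing Makefile step does.
rm -f codecov codecov.SHA256SUM
```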
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 9ff32fe0-a38f-4c43-ab77-dfdadbf0a804
📒 Files selected for processing (2)
Makefile
images/ci/Dockerfile
# Create the directory where Go will write coverage data at runtime.
RUN mkdir -p /tmp/e2e-cover && chmod 777 /tmp/e2e-cover
Avoid world-writable permissions for /tmp/e2e-cover.
chmod 777 weakens container security unnecessarily. Since the process runs as UID/GID 65534, set ownership and restrictive perms instead.
Suggested hardening
-RUN mkdir -p /tmp/e2e-cover && chmod 777 /tmp/e2e-cover
+RUN mkdir -p /tmp/e2e-cover && chown 65534:65534 /tmp/e2e-cover && chmod 700 /tmp/e2e-cover
As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🧹 Nitpick comments (10)
test/utils/bitwarden_resources.go (2)
26-32: Constants are duplicated with aws_resources.go. The constants externalSecretsAPIVersionV1 and clusterSecretStoreKind are also defined in test/utils/aws_resources.go. Consider extracting these to a shared constants file within the utils package to avoid drift.
70-96: Consider handling SetNestedField errors. While SetNestedField typically only fails if the path is structurally invalid (which won't happen with these compile-time known paths), explicitly checking the error would make the code more defensive.
-	_ = unstructured.SetNestedField(u.Object, []interface{}{
+	if err := unstructured.SetNestedField(u.Object, []interface{}{
 		map[string]interface{}{
 			"secretKey": "value",
 			"remoteRef": map[string]interface{}{
 				"key": remoteKey,
 			},
 		},
-	}, "spec", "data")
+	}, "spec", "data"); err != nil {
+		panic(fmt.Sprintf("failed to set spec.data: %v", err))
+	}
test/utils/bitwarden_api_runner.go (1)
35-38: Consider pinning the curl image tag for reproducibility. Using docker.io/curlimages/curl:latest can lead to non-reproducible test behavior if the image is updated with breaking changes. Consider pinning to a specific version.
 const (
-	bitwardenAPITestRunnerImage = "docker.io/curlimages/curl:latest"
+	bitwardenAPITestRunnerImage = "docker.io/curlimages/curl:8.7.1"
 	bitwardenCredMountPath      = "/etc/bitwarden-cred"
 )
Makefile (2)
182: Add a .PHONY declaration for the targets on this line. Static analysis flagged that the targets listed here should be declared as .PHONY to ensure they always run, regardless of whether a file with the same name exists.
+.PHONY: fmt vet test test-unit test-e2e e2e-coverage-publish run update-vendor update-dep
 fmt vet test test-unit test-e2e e2e-coverage-publish run update-vendor update-dep: GOFLAGS=
219-222: Codecov uploader pinning and SHA256 verification are correct. The implementation correctly pins v0.7.6 and verifies the SHA256 checksum before execution. v0.7.6 is a valid release; however, v0.8.0 is now the latest available version. If v0.7.6 is not pinned for specific compatibility reasons, consider upgrading to v0.8.0 to pick up bug fixes and security patches.
test/utils/bitwarden.go (2)
182-201: Consider validating organization and project IDs. FetchBitwardenCredsFromSecret validates that token is non-empty but silently returns empty strings for orgID and projectID if they're missing. If these are required for Bitwarden API operations, consider validating them as well.
 if v, ok := secret.Data[BitwardenCredKeyOrgID]; ok {
 	orgID = string(v)
+} else {
+	return "", "", "", fmt.Errorf("bitwarden creds secret %s/%s missing required key %q", secretNamespace, secretName, BitwardenCredKeyOrgID)
 }
326-341: Permissive error handling may mask connectivity issues. The reachability check treats http_code=000 (connection refused/timeout/TLS failure) and empty logs as success to avoid failing the suite. While this is pragmatic, it could mask real connectivity problems that would cause subsequent tests to fail with less informative errors. Consider at minimum logging a warning when these permissive paths are taken, so operators can investigate if tests fail later.
test/utils/dynamic_resources.go (1)
112-127: Input mutation may surprise callers. getResourceInterface mutates the input unstructuredObj by calling SetNamespace (line 121). This side effect could be unexpected for callers who pass an object they intend to reuse. Consider documenting this behavior or working with a copy.
 func (d DynamicResourceLoader) getResourceInterface(unstructuredObj *unstructured.Unstructured, overrideNamespace string) dynamic.ResourceInterface {
 	gvk := unstructuredObj.GroupVersionKind()
 	gr, err := restmapper.GetAPIGroupResources(d.KubeClient.Discovery())
 	require.NoError(d.t, err)
 	mapper := restmapper.NewDiscoveryRESTMapper(gr)
 	mapping, err := mapper.RESTMapping(gvk.GroupKind(), gvk.Version)
 	require.NoError(d.t, err)
 	if mapping.Scope.Name() == meta.RESTScopeNameNamespace {
 		if overrideNamespace != "" {
-			unstructuredObj.SetNamespace(overrideNamespace)
+			unstructuredObj.SetNamespace(overrideNamespace) // Note: mutates input object
 		}
 		require.NotEmpty(d.t, unstructuredObj.GetNamespace(), "Namespace can not be empty for namespaced resource")
 		return d.DynamicClient.Resource(mapping.Resource).Namespace(unstructuredObj.GetNamespace())
 	}
 	return d.DynamicClient.Resource(mapping.Resource)
 }
test/utils/artifact_dump.go (1)
171-177: Consider pre-compiling the regex for sanitizeFilename. The regex is compiled on every call. While acceptable for e2e test utilities where this runs infrequently, pre-compiling at package level would be slightly more efficient.
♻️ Optional: Pre-compile regex
+var sanitizeFilenameRe = regexp.MustCompile(`[^a-zA-Z0-9_-]`)
+
 func sanitizeFilename(s string) string {
-	s = regexp.MustCompile(`[^a-zA-Z0-9_-]`).ReplaceAllString(s, "_")
+	s = sanitizeFilenameRe.ReplaceAllString(s, "_")
 	if len(s) > 100 {
 		s = s[:100]
 	}
 	return s
 }
test/e2e/bitwarden_api_test.go (1)
139-144: Consider documenting why both 200 and 400 are acceptable. The test accepts either HTTP 200 or 400 for an empty ids list. Adding a brief comment explaining the expected API behavior variance would improve clarity.
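The http_code=000 sentinel treated permissively in the bitwarden.go reachability check above is what curl's `%{http_code}` write-out yields when no HTTP response arrives at all (connection refused, timeout, TLS failure). That is easy to reproduce against a closed local port — port 9 here is just an arbitrary port assumed to have no listener:

```shell
#!/usr/bin/env bash
# Probe a port where nothing is listening: curl fails before any HTTP
# exchange happens, so %{http_code} reports 000, not a real status code.
code="$(curl -s -o /dev/null -w '%{http_code}' --max-time 2 http://127.0.0.1:9 || true)"
echo "http_code=${code}"
```

This is why 000 signals a transport-level failure rather than a server response, and why silently treating it as success can hide real connectivity problems.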
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: a9d37d64-68e3-464b-9b91-29402719f719
📒 Files selected for processing (20)
Makefile
README.md
hack/govulncheck.sh
images/ci/Dockerfile.codecoverage
test/README.md
test/apis/README.md
test/e2e/README.md
test/e2e/bitwarden_api_test.go
test/e2e/bitwarden_es_test.go
test/e2e/e2e_suite_test.go
test/e2e/e2e_test.go
test/go.mod
test/utils/artifact_dump.go
test/utils/aws_resources.go
test/utils/bitwarden.go
test/utils/bitwarden_api_runner.go
test/utils/bitwarden_resources.go
test/utils/cleanup.go
test/utils/conditions.go
test/utils/dynamic_resources.go
✅ Files skipped from review due to trivial changes (6)
- README.md
- test/README.md
- test/apis/README.md
- hack/govulncheck.sh
- test/go.mod
- test/e2e/README.md
Force-pushed from 3604751 to 8f666be (Compare)
Actionable comments posted: 2
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: a6f2a1e6-94ca-4808-93c6-5d2746b62eb7
📒 Files selected for processing (3)
Makefile
hack/e2e-coverage.sh
images/ci/Dockerfile.coverage
oc patch csv "${csv}" -n "${NAMESPACE}" --type=json -p "[
  {\"op\": \"replace\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/image\", \"value\": \"${COVERAGE_IMAGE}\"},
  {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/env/-\", \"value\": {\"name\": \"GOCOVERDIR\", \"value\": \"${GOCOVERDIR_PATH}\"}},
  {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/volumeMounts/-\", \"value\": {\"name\": \"coverage-data\", \"mountPath\": \"${GOCOVERDIR_PATH}\"}},
  {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/volumes/-\", \"value\": {\"name\": \"coverage-data\", \"persistentVolumeClaim\": {\"claimName\": \"${PVC_NAME}\"}}}
]"
🧩 Analysis chain
🏁 Scripts executed:
#!/bin/bash
# Verify non-idempotent append operations are present in the setup patch.
rg -n '"/spec/install/spec/deployments/0/spec/template/spec/containers/0/(env|volumeMounts)/-"|"/spec/install/spec/deployments/0/spec/template/spec/volumes/-"' hack/e2e-coverage.sh
cat -n hack/e2e-coverage.sh | sed -n '50,70p'
# Find the setup function and its callers
grep -n "setup()" hack/e2e-coverage.sh | head -20
# Search for main and overall flow
sed -n '1,100p' hack/e2e-coverage.sh
# View the end of the file to see script invocation logic
tail -50 hack/e2e-coverage.sh
Guard CSV patching against duplicate entries when re-running setup.
Lines 59–62 use append operations (/- suffix) to add env, volumeMounts, and volumes to the pod spec. Running setup twice will duplicate these entries, breaking the pod spec with duplicate environment variables and volume mounts. Add a check for the existing GOCOVERDIR entry before patching, or use a replace operation on the entire container array instead of appending.
Suggested approach
+ local already_patched
+ already_patched="$(oc get csv "${csv}" -n "${NAMESPACE}" -o jsonpath='{.spec.install.spec.deployments[0].spec.template.spec.containers[0].env[?(@.name=="GOCOVERDIR")].name}' 2>/dev/null || true)"
+ if [[ -n "${already_patched}" ]]; then
+ echo "CSV already patched for coverage; skipping patch step."
+ else
oc patch csv "${csv}" -n "${NAMESPACE}" --type=json -p "[
{\"op\": \"replace\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/image\", \"value\": \"${COVERAGE_IMAGE}\"},
{\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/env/-\", \"value\": {\"name\": \"GOCOVERDIR\", \"value\": \"${GOCOVERDIR_PATH}\"}},
{\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/volumeMounts/-\", \"value\": {\"name\": \"coverage-data\", \"mountPath\": \"${GOCOVERDIR_PATH}\"}},
{\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/volumes/-\", \"value\": {\"name\": \"coverage-data\", \"persistentVolumeClaim\": {\"claimName\": \"${PVC_NAME}\"}}}
]"
+ fi📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| oc patch csv "${csv}" -n "${NAMESPACE}" --type=json -p "[ | |
| {\"op\": \"replace\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/image\", \"value\": \"${COVERAGE_IMAGE}\"}, | |
| {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/env/-\", \"value\": {\"name\": \"GOCOVERDIR\", \"value\": \"${GOCOVERDIR_PATH}\"}}, | |
| {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/volumeMounts/-\", \"value\": {\"name\": \"coverage-data\", \"mountPath\": \"${GOCOVERDIR_PATH}\"}}, | |
| {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/volumes/-\", \"value\": {\"name\": \"coverage-data\", \"persistentVolumeClaim\": {\"claimName\": \"${PVC_NAME}\"}}} | |
| ]" | |
| local already_patched | |
| already_patched="$(oc get csv "${csv}" -n "${NAMESPACE}" -o jsonpath='{.spec.install.spec.deployments[0].spec.template.spec.containers[0].env[?(@.name=="GOCOVERDIR")].name}' 2>/dev/null || true)" | |
| if [[ -n "${already_patched}" ]]; then | |
| echo "CSV already patched for coverage; skipping patch step." | |
| else | |
| oc patch csv "${csv}" -n "${NAMESPACE}" --type=json -p "[ | |
| {\"op\": \"replace\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/image\", \"value\": \"${COVERAGE_IMAGE}\"}, | |
| {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/env/-\", \"value\": {\"name\": \"GOCOVERDIR\", \"value\": \"${GOCOVERDIR_PATH}\"}}, | |
| {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/containers/0/volumeMounts/-\", \"value\": {\"name\": \"coverage-data\", \"mountPath\": \"${GOCOVERDIR_PATH}\"}}, | |
| {\"op\": \"add\", \"path\": \"/spec/install/spec/deployments/0/spec/template/spec/volumes/-\", \"value\": {\"name\": \"coverage-data\", \"persistentVolumeClaim\": {\"claimName\": \"${PVC_NAME}\"}}} | |
| ]" | |
| fi |
echo "Creating extractor pod to access PVC data..."
oc run coverage-extractor \
  --image="${EXTRACTOR_IMAGE}" \
  --restart=Never \
  --overrides="{
    \"spec\": {
      \"volumes\": [{\"name\": \"cov\", \"persistentVolumeClaim\": {\"claimName\": \"${PVC_NAME}\"}}],
      \"containers\": [{\"name\": \"coverage-extractor\", \"image\": \"${EXTRACTOR_IMAGE}\",
        \"command\": [\"sleep\", \"600\"],
        \"volumeMounts\": [{\"name\": \"cov\", \"mountPath\": \"${GOCOVERDIR_PATH}\"}]
      }]
    }
  }" \
  -n "${NAMESPACE}"

oc wait pod/coverage-extractor --for=condition=Ready \
  -n "${NAMESPACE}" --timeout=120s

# Use /. suffix so oc cp places files directly in coverage_dir, not nested
mkdir -p "${coverage_dir}"
oc cp "${NAMESPACE}/coverage-extractor:${GOCOVERDIR_PATH}/." "${coverage_dir}"
oc delete pod coverage-extractor -n "${NAMESPACE}" --ignore-not-found --wait=false 2>/dev/null || true
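The /. suffix noted in the comment above follows the same source-path convention as plain cp -r, so the nesting behavior can be verified locally without a cluster (the directory and file names below are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

src="$(mktemp -d)"; out1="$(mktemp -d)"; out2="$(mktemp -d)"
mkdir -p "$src/e2e-cover"
touch "$src/e2e-cover/covmeta.abc" "$src/e2e-cover/covcounters.abc"

# Without the trailing dot the directory itself is copied, so the
# coverage files end up nested one level deeper than expected.
cp -r "$src/e2e-cover" "$out1"
ls "$out1/e2e-cover"   # covmeta.abc and covcounters.abc, nested

# With the trailing dot only the contents are copied, landing directly
# in the destination where go tool covdata can consume them.
cp -r "$src/e2e-cover/." "$out2"
ls "$out2"             # covmeta.abc and covcounters.abc, flat
```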
Always clean up coverage-extractor via trap on failure paths.
If oc cp (Line 117) or any later command fails, set -e exits before the explicit delete at Line 118, leaving the extractor pod behind. Add an EXIT trap in collect() so cleanup is guaranteed.
Suggested hardening
collect() {
echo "--- E2E Coverage Collection ---"
+ cleanup_extractor() {
+ oc delete pod coverage-extractor -n "${NAMESPACE}" --ignore-not-found --wait=false 2>/dev/null || true
+ }
+ trap cleanup_extractor EXIT
@@
- oc delete pod coverage-extractor -n "${NAMESPACE}" --ignore-not-found --wait=false 2>/dev/null || true
+ cleanup_extractor
+  trap - EXIT
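The trap pattern in the suggested diff can be exercised without a cluster by letting a local marker file stand in for the extractor pod — the `rm` cleanup here is illustrative; the real cleanup is the `oc delete pod` call:

```shell
#!/usr/bin/env bash

marker="$(mktemp)"   # stands in for the coverage-extractor pod

collect() {
  # Guarantee cleanup even if a later step fails.
  trap 'rm -f "$marker"' EXIT
  echo "simulating: oc cp fails mid-collection" >&2
  return 1             # abort before the explicit cleanup is reached
  # On success the function would clean up and clear the trap:
  # rm -f "$marker"; trap - EXIT
}

# Run in a subshell so its EXIT trap fires when collect bails out,
# without terminating this outer script.
( collect )
echo "exit status: $?"   # exit status: 1
test ! -e "$marker" && echo "marker removed by trap despite the failure"
```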
@siddhibhor-56: The following test failed, say
Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.